Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators

Neural Information Processing Systems

We provide finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences. Rather than plugging a consistent density estimate (which requires k → ∞ as the sample size n → ∞) into the functional of interest, the estimators we consider fix k and perform a bias correction. This can be more efficient computationally and, as we show, statistically, leading to faster convergence rates. Our framework unifies several previous estimators, and for most of them ours are the first finite-sample guarantees.
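The best-known member of this fixed-k family is the classical Kozachenko–Leonenko differential entropy estimator, where the bias correction is the digamma term ψ(n) − ψ(k) replacing log(n/k). The sketch below is an illustrative implementation of that standard estimator, not of the paper's general framework; the function name and brute-force distance computation are choices made here for clarity.

```python
import numpy as np
from scipy.special import digamma, gammaln

def kl_entropy(x, k=3):
    """Kozachenko-Leonenko fixed-k estimate of differential entropy.

    x : (n, d) array of i.i.d. samples from a continuous density.
    k : number of nearest neighbors, held FIXED as n grows; the
        digamma terms psi(n) - psi(k) are the bias correction that
        makes this consistent without requiring k -> infinity.
    """
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    # Brute-force pairwise Euclidean distances (fine for a sketch;
    # a k-d tree would be used at scale).
    dists = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)  # exclude each point from its own neighbors
    eps = np.sort(dists, axis=1)[:, k - 1]  # distance to k-th nearest neighbor
    # Log-volume of the d-dimensional unit ball: pi^(d/2) / Gamma(d/2 + 1).
    log_c_d = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(eps))
```

For a 1-D standard normal sample the estimate should approach the true entropy, (1/2)·log(2πe) ≈ 1.419, even with k as small as 3.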


Reviews: Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators

Neural Information Processing Systems

There are some issues with the conditions imposed on distributions in this paper. In particular, regarding the local lower bound hypothesis: in the proof of Lemma 2, ε may depend on x, so p_* will be smaller than stated. As an example, if p(x) goes to 0 like x^a as x goes to 0, then p_*(x) goes to 0 like x^(a+1). AFTER REBUTTAL: 1) The rebuttal here is helpful, but there still seem to be some issues with the local lower bound hypothesis. The authors claim it is very mild in light of Lemma 2, but Lemma 2 does not apply well to the case p(x) ~ x^a. I think they should at least include what they have written in the rebuttal.


Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators

Singh, Shashank, Poczos, Barnabas

Neural Information Processing Systems
